Point cloud registration (PCR) is a popular research topic in computer vision. Recently, evolutionary registration methods have received continuous attention because of their robustness to the initial pose and flexibility in objective function design. However, most evolutionary registration methods do not handle local optima well, and they rarely investigate the success ratio, i.e., the probability of not falling into a local optimum, which is closely related to the practicality of the algorithm. Evolutionary multi-task optimization (EMTO) is a widely used paradigm that can boost exploration capability through knowledge transfer among related tasks. Inspired by this concept, this study proposes a novel evolutionary registration algorithm via EMTO, where the multi-task configuration is based on the idea of solution space cutting. Concretely, one task searching in the cut space assists another task with a complex function landscape in escaping from local optima and enhancing the successful registration ratio. To reduce unnecessary computational cost, a sparse-to-dense strategy is proposed. In addition, a novel fitness function robust to various overlap rates and a problem-specific metric of computational cost are introduced. Compared with 7 evolutionary and 4 traditional registration approaches on object-scale and scene-scale registration datasets, experimental results demonstrate that the proposed method achieves superior precision and is better at escaping local optima.
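As an illustration of the kind of fitness evaluation an evolutionary registration loop relies on, the sketch below scores a candidate rigid pose by a trimmed nearest-neighbor distance, a common way to make the objective tolerant of partial overlap. It is only a sketch under that assumption; the paper's actual fitness function and EMTO machinery are not reproduced here, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def trimmed_fitness(source, target, rotation, translation, overlap_ratio=0.6):
    """Score a candidate rigid pose (lower is better).

    A hypothetical trimmed nearest-neighbor objective: transform the source
    cloud, find each point's nearest target point, and average only the
    closest `overlap_ratio` fraction of residuals so that non-overlapping
    regions do not dominate the score.
    """
    transformed = source @ rotation.T + translation        # (N, 3)
    dists, _ = cKDTree(target).query(transformed)          # nearest-neighbor distances
    k = max(1, int(overlap_ratio * len(dists)))
    return np.sort(dists)[:k].mean()

# Toy usage: identical clouds under the identity pose give a fitness of ~0.
src = np.random.rand(100, 3)
print(trimmed_fitness(src, src, np.eye(3), np.zeros(3)))
```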
Empirical studies suggest that machine learning models trained with empirical risk minimization (ERM) often rely on attributes that may be spuriously correlated with the class labels. Such models typically perform poorly at inference time on data lacking such correlations. In this work, we explicitly consider a situation where potential spurious correlations are present in the majority of training data. In contrast with existing approaches, which use ERM model outputs to detect the samples without spurious correlations and then heuristically upweight or upsample them, we propose the logit correction (LC) loss, a simple yet effective improvement on the softmax cross-entropy loss, to correct the sample logit. We demonstrate that minimizing the LC loss is equivalent to maximizing the group-balanced accuracy, so the proposed LC can mitigate the negative impact of spurious correlations. Our extensive experimental results further reveal that the proposed LC loss outperforms the SoTA solutions on multiple popular benchmarks by a large margin, an average 5.5% absolute improvement, without access to spurious attribute labels. LC is also competitive with oracle methods that make use of the attribute labels. Code is available at https://github.com/shengliu66/LC.
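The general flavor of a logit-correction loss can be made concrete with a short sketch: the logits are shifted by a log-prior before the softmax cross-entropy so that frequent (e.g. spuriously correlated) groups are not favored. This is only a schematic of the idea under the assumption that class or group frequencies are estimated; the actual LC loss is defined in the paper and released at the linked repository.

```python
import torch
import torch.nn.functional as F

def logit_corrected_loss(logits, labels, log_prior):
    """Schematic logit correction: add a per-class log-prior to the logits
    before the softmax cross-entropy.  `log_prior` is a (num_classes,)
    tensor, e.g. the log of estimated class/group frequencies (an assumption
    made for this sketch, not the paper's exact formulation)."""
    return F.cross_entropy(logits + log_prior, labels)

# Toy usage with 3 classes whose estimated frequencies are imbalanced.
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
prior = torch.tensor([0.7, 0.2, 0.1]).log()
print(logit_corrected_loss(logits, labels, prior))
```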
We launch EVA, a vision-centric foundation model to explore the limits of visual representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained to reconstruct masked-out image-text aligned vision features conditioned on visible image patches. Via this pretext task, we can efficiently scale up EVA to one billion parameters, and it sets new records on a broad range of representative vision downstream tasks, such as image recognition, video action recognition, object detection, instance segmentation, and semantic segmentation, without heavy supervised training. Moreover, we observe that quantitative changes in scaling EVA result in qualitative changes in transfer learning performance that are not present in other models. For instance, EVA takes a great leap on the challenging large-vocabulary instance segmentation task: our model achieves almost the same state-of-the-art performance on the LVISv1.0 dataset, with over a thousand categories, as on the COCO dataset, with only eighty categories. Beyond a pure vision encoder, EVA can also serve as a vision-centric, multi-modal pivot to connect images and text. We find that initializing the vision tower of a giant CLIP from EVA can greatly stabilize the training and outperform the from-scratch counterpart with much fewer samples and less compute, providing a new direction for scaling up and accelerating the costly training of multi-modal foundation models. To facilitate future research, we release all the code and models at https://github.com/baaivision/EVA.
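A schematic view of the pretext task: a student ViT sees only the visible patches and regresses the frozen image-text aligned (e.g. CLIP) features of the masked patches. The snippet below sketches only the loss computation; the shapes and the choice of a cosine objective are assumptions made for illustration, and EVA's actual training code is in the linked repository.

```python
import torch
import torch.nn.functional as F

def masked_feature_loss(pred_features, target_features, mask):
    """Sketch of a masked feature-reconstruction objective.

    pred_features:   (B, N, D) features predicted by the student for all patches
    target_features: (B, N, D) frozen image-text aligned (e.g. CLIP) features
    mask:            (B, N) boolean, True where the patch was masked out
    Only masked positions contribute; cosine distance is one common choice.
    """
    pred = F.normalize(pred_features[mask], dim=-1)
    target = F.normalize(target_features[mask], dim=-1)
    return (1.0 - (pred * target).sum(dim=-1)).mean()

# Toy usage with random tensors.
B, N, D = 2, 16, 8
mask = torch.rand(B, N) < 0.4
print(masked_feature_loss(torch.randn(B, N, D), torch.randn(B, N, D), mask))
```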
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and ask the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
A diffusion auction is a market to sell commodities over a social network, where the challenge is to incentivize existing buyers to invite their neighbors in the network to join the market. Existing mechanisms have been designed to solve this challenge in various settings, aiming at desirable properties such as non-deficiency, incentive compatibility, and social welfare maximization. Since the mechanisms are employed in dynamic networks with ever-changing structures, buyers can easily generate fake nodes in the network to manipulate the mechanisms for their own benefit, which is commonly known as the Sybil attack. We observe that strategic agents may gain an unfair advantage in existing mechanisms through such attacks. To resist this potential attack, we propose two diffusion auction mechanisms, the Sybil tax mechanism (STM) and the Sybil cluster mechanism (SCM), to achieve both Sybil-proofness and incentive compatibility in the single-item setting. Our proposal provides the first mechanisms to protect the interests of buyers against Sybil attacks with a mild sacrifice of social welfare and revenue.
Can an agent be trained to answer difficult mathematical questions by playing a game? We consider the integer feasibility problem, the challenge of deciding whether a system of linear equations and inequalities has a solution with integer values. This is a famous NP-complete problem with applications in many areas of mathematics and computer science. Our paper describes a novel algebraic reinforcement learning framework that allows an agent to play a game equivalent to the integer feasibility problem. We explain how the integer feasibility problem can be transformed into a game over a set of arrays with fixed margin sums. The game starts from an initial state (an array), and by taking legal moves that leave the margins unchanged, the aim is to eventually reach a winning state with zeros at prescribed positions. To win the game, the player must find a path between the initial state and a final terminal winning state; finding such a winning state is equivalent to solving the integer feasibility problem. The key algebraic ingredient is a Gröbner basis of the toric ideal of the underlying axial transportation polyhedron; the Gröbner basis can be seen as the set of connecting moves (actions) of the game. We then propose a novel RL approach that trains the agent to predict moves in a continuous space in order to cope with the large action space. The continuous moves are then projected onto the set of legal moves so that the path always leads to valid states. As a proof of concept, we demonstrate experimentally that our agent plays the simplest version of the game, for 2-way tables, very well. Our work highlights the potential of training agents to solve non-trivial mathematical queries by playing games with contemporary machine learning methods.
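For the simplest 2-way-table version of the game, a legal move is easy to state concretely: adding the pattern +1/-1/-1/+1 on a 2x2 minor changes the entries but leaves every row and column sum unchanged. The sketch below is an illustration of that invariant, not the paper's code; it also ignores non-negativity constraints on entries.

```python
import numpy as np

def apply_basic_move(table, r1, r2, c1, c2, sign=+1):
    """Apply the classic +1/-1/-1/+1 move on the 2x2 minor (r1, r2) x (c1, c2).

    This move keeps every row sum and column sum of the table unchanged,
    which is exactly the invariant of the 2-way-table game described above.
    (Non-negativity of the entries is not checked in this sketch.)
    """
    t = table.copy()
    t[r1, c1] += sign
    t[r1, c2] -= sign
    t[r2, c1] -= sign
    t[r2, c2] += sign
    return t

# Toy check: the margins are identical before and after the move.
table = np.array([[3, 1, 2],
                  [0, 4, 1]])
moved = apply_basic_move(table, 0, 1, 0, 2)
assert (table.sum(axis=0) == moved.sum(axis=0)).all()
assert (table.sum(axis=1) == moved.sum(axis=1)).all()
print(moved)
```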
Detecting actions in videos has been widely applied in on-device applications. Practical on-device videos are always untrimmed, containing both actions and background. It is desirable for a model to both recognize the action categories and localize the temporal positions where the actions occur. Such a task is called temporal action localization (TAL), and it is always trained on the cloud, where multiple untrimmed videos are collected and labeled. It is desirable for a TAL model to continuously learn from new data, which can directly improve action detection precision while protecting customers' privacy. However, training a TAL model is non-trivial, since it requires a large number of video samples with temporal annotations, and annotating videos frame by frame is extremely time-consuming and expensive. Although weakly supervised TAL (W-TAL), which learns from untrimmed videos with only video-level labels, has been proposed, such methods are also not suitable for on-device learning scenarios. In practical on-device learning applications, data are collected in streams. Dividing such a long video stream into multiple video segments requires a lot of human effort, which hinders the exploration of applying the TAL task to realistic on-device learning applications. To enable W-TAL models to learn from long, untrimmed streaming videos, we propose an efficient video learning approach that can directly adapt to new environments. We first propose a self-adaptive video dividing approach with a contrast score-based segment merging method to convert the video stream into multiple segments. Then, we explore different sampling strategies on the TAL task to request as few labels as possible. To the best of our knowledge, we are the first attempt to directly learn from on-device, long, streaming videos.
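One plausible reading of contrast score-based segment merging is sketched below: adjacent windows of the stream are kept in the same segment while their features stay similar, and a new segment starts where the content changes. The cosine score, the threshold, and the function names are assumptions made purely for illustration; the paper's exact procedure may differ.

```python
import numpy as np

def merge_segments(window_features, threshold=0.8):
    """Illustrative segment merging for a streaming video.

    window_features: (T, D) array, one feature vector per short window.
    Adjacent windows are merged into one segment while their cosine
    similarity stays above `threshold` (a hypothetical contrast score);
    a new segment starts where the similarity drops.
    Returns a list of (start, end) window indices, end exclusive.
    """
    feats = window_features / np.linalg.norm(window_features, axis=1, keepdims=True)
    segments, start = [], 0
    for t in range(1, len(feats)):
        if float(feats[t - 1] @ feats[t]) < threshold:   # content change detected
            segments.append((start, t))
            start = t
    segments.append((start, len(feats)))
    return segments

# Toy usage: two blocks of dissimilar features produce two segments.
feats = np.vstack([np.tile([1.0, 0.0], (5, 1)), np.tile([0.0, 1.0], (5, 1))])
print(merge_segments(feats))
```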
Target-oriented opinion word extraction (TOWE) is a fine-grained sentiment analysis task that aims to extract from a sentence the opinion words corresponding to a given opinion target. Recently, deep learning approaches have made remarkable progress on this task. Nevertheless, the TOWE task still suffers from the scarcity of training data due to the expensive data annotation process. Limited labeled data increases the risk of distribution shift between test data and training data. In this paper, we propose to exploit massive unlabeled data to reduce this risk by increasing the model's exposure to varying distribution shifts. Specifically, we propose a novel multi-grained consistency regularization (MGCR) method to make use of unlabeled data, and design two filters specifically for TOWE to filter noisy data at different granularities. Extensive experimental results on four TOWE benchmark datasets demonstrate the superiority of MGCR compared with current state-of-the-art methods. In-depth analysis also demonstrates the effectiveness of the different-granularity filters. Our code is available at https://github.com/towessl/towessl.
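Consistency regularization on unlabeled data, the backbone of the MGCR idea, can be sketched in a few lines: predictions on an original sentence and on a perturbed view are pushed to agree, and low-confidence tokens are filtered out. The simple confidence filter below is only a stand-in for the paper's two TOWE-specific granularity filters, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig, logits_perturbed, conf_threshold=0.9):
    """Sketch of confidence-filtered consistency regularization.

    logits_orig / logits_perturbed: (B, T, C) token-level logits for an
    unlabeled sentence and a perturbed view of it.  Tokens whose original
    prediction falls below `conf_threshold` are dropped (a simple stand-in
    for the paper's multi-granularity filters).
    """
    probs = logits_orig.softmax(dim=-1)
    keep = probs.max(dim=-1).values > conf_threshold          # (B, T) mask
    if keep.sum() == 0:
        return logits_orig.new_zeros(())
    targets = probs[keep].detach()
    log_q = F.log_softmax(logits_perturbed[keep], dim=-1)
    return F.kl_div(log_q, targets, reduction="batchmean")

# Toy usage: sharpen the original logits so some tokens pass the filter.
a, b = torch.randn(2, 6, 5), torch.randn(2, 6, 5)
print(consistency_loss(a * 5, b))
```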
Semi-supervised learning (SSL) improves model generalization by leveraging massive unlabeled data to augment limited labeled samples. However, currently, popular SSL evaluation protocols are often constrained to computer vision (CV) tasks. In addition, previous work typically trains deep neural networks from scratch, which is time-consuming and environmentally unfriendly. To address the above issues, we construct a Unified SSL Benchmark (USB) by selecting 15 diverse, challenging, and comprehensive tasks from CV, natural language processing (NLP), and audio processing (Audio), on which we systematically evaluate the dominant SSL methods, and also open-source a modular and extensible codebase for fair evaluation of these SSL methods. We further provide pre-trained versions of state-of-the-art neural models for CV tasks to make the cost of further tuning affordable. USB enables the evaluation of a single SSL algorithm on more tasks from multiple domains at a lower cost. Specifically, on a single NVIDIA V100, only 37 GPU days are required to evaluate FixMatch on the 15 tasks in USB, whereas 335 GPU days (279 GPU days on the 4 CV datasets other than ImageNet) are needed for 5 CV tasks with the typical protocol.
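Since FixMatch is the algorithm used in the cost comparison above, a minimal sketch of its unlabeled-data loss may help make it concrete: a hard pseudo-label is taken from the weakly augmented view and, only when confident, supervises the strongly augmented view. This is a generic sketch of the FixMatch idea, not code from the USB codebase.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(logits_weak, logits_strong, threshold=0.95):
    """Minimal FixMatch-style loss on an unlabeled batch.

    logits_weak / logits_strong: (B, C) predictions for weakly and strongly
    augmented views of the same images.  A hard pseudo-label is taken from
    the weak view and applied to the strong view only when the weak
    prediction's confidence exceeds `threshold`.
    """
    with torch.no_grad():
        probs = logits_weak.softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (per_sample * mask).mean()

# Toy usage with random logits.
lw, ls = torch.randn(16, 10) * 5, torch.randn(16, 10)
print(fixmatch_unlabeled_loss(lw, ls))
```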
The sparsely activated mixture-of-experts (MoE) layer, controlled by a router, has achieved great success in deep learning. However, the understanding of such an architecture remains elusive. In this paper, we formally study how the MoE layer improves the performance of neural network learning and why the mixture model does not collapse into a single model. Our empirical results suggest that the cluster structure of the underlying problem and the non-linearity of the experts are pivotal to the success of MoE. To further understand this, we consider a challenging classification problem with intrinsic cluster structure, which is hard to learn with a single expert. Yet with an MoE layer, by choosing the experts as two-layer nonlinear convolutional neural networks (CNNs), we show that the problem can be learned successfully. Furthermore, our theory shows that the router can learn the cluster-center features, which helps divide the complex input problem into simpler linear classification sub-problems that individual experts can conquer. To the best of our knowledge, this is the first result towards formally understanding the mechanism of the MoE layer for deep learning.
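A minimal sketch of a sparsely activated MoE layer with top-1 routing is given below; it uses small two-layer nonlinear MLP experts rather than the CNN experts analyzed in the paper, and is only meant to make the division of labor between router and experts concrete.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Sketch of a mixture-of-experts layer with top-1 (sparse) routing.

    The router scores each input, a single expert (a two-layer nonlinear MLP
    here, standing in for the CNN experts of the paper) is selected per
    example, and its output is scaled by the router's softmax weight.
    """
    def __init__(self, dim, hidden, num_experts):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (B, dim)
        gate = self.router(x).softmax(dim=-1)    # (B, num_experts)
        weight, idx = gate.max(dim=-1)           # top-1 expert per example
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = idx == e
            if sel.any():
                out[sel] = weight[sel].unsqueeze(1) * expert(x[sel])
        return out

# Toy usage.
moe = Top1MoE(dim=8, hidden=16, num_experts=4)
print(moe(torch.randn(5, 8)).shape)
```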